Learning a regression function via Tikhonov regularization

Author

  • Sameer M. Jalnapurkar
Abstract

We consider the problem of estimating a regression function on the basis of empirical data. We use a Reproducing Kernel Hilbert Space (RKHS) as our hypothesis space, and we follow the methodology of Tikhonov regularization. We show that this leads to a learning scheme that is different from the one usually considered in Learning Theory. Subject to some regularity assumptions on the regression function, our scheme yields an asymptotic rate of convergence in the RKHS norm that is almost as good as O(1/l), where l is the number of data points.
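The Tikhonov-regularized regression scheme described in the abstract is closely related to kernel ridge regression: minimize the empirical squared error plus a penalty on the RKHS norm, which by the representer theorem reduces to a linear system in the kernel Gram matrix. The following is a minimal sketch under assumed choices not taken from the paper (a Gaussian kernel, and illustrative values of the bandwidth `sigma` and regularization parameter `lam`):

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||X[i] - Z[j]||^2 / (2 * sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_tikhonov(X, y, lam=0.1, sigma=1.0):
    """Representer coefficients: solve (K + lam * l * I) alpha = y."""
    l = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * l * np.eye(l), y)

def predict(X_train, alpha, X_new, sigma=1.0):
    """Evaluate the estimated regression function f(x) = sum_i alpha_i k(x_i, x)."""
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Noisy samples of a smooth target function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

alpha = fit_tikhonov(X, y, lam=0.01, sigma=0.7)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(predict(X, alpha, X_test, sigma=0.7))
```

The factor `l` multiplying `lam` matches the convention in which the data-fit term is an empirical average; with a fixed `lam` the estimate interpolates the noise less as more penalty is applied.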


Similar articles

Regression tasks in machine learning via Fenchel duality

Supervised learning methods are powerful techniques for learning a function from a given set of labeled data, the so-called training data. In this paper the support vector machines approach for regression is investigated from a theoretical point of view, making use of convex analysis and Fenchel duality. Starting with the corresponding Tikhonov regularization problem, reformulated as a convex o...


A numerical approach for solving a nonlinear inverse diffusion problem by Tikhonov regularization

In this paper, we propose an algorithm for numerically solving an inverse nonlinear diffusion problem. In addition, the least-squares method is adopted to find the solution. To regularize the resultant ill-conditioned linear system of equations, we apply the Tikhonov regularization method to obtain a stable numerical approximation to the solution. Some numerical experiments confirm the utility of th...


A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization

In the framework of supervised learning, we prove that the iterative algorithm introduced in Umanità and Villa (2010) makes it possible to consistently estimate the relevant features of the regression function, under the a priori assumption that it admits a sparse representation on a fixed dictionary.


Supervised Scale-Regularized Linear Convolutionary Filters

We start by demonstrating that an elementary learning task—learning a linear filter from training data by means of regression—can be solved very efficiently for feature spaces of very high dimensionality. In a second step, firstly, acknowledging that such high-dimensional learning tasks typically benefit from some form of regularization and, secondly, arguing that the problem of scale has not b...


Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations

Estimating the generalization performance of learning algorithms is one of the main purposes of machine learning theoretical research. The previous results describing the generalization ability of Tikhonov regularization algorithm are almost all based on independent and identically distributed (i.i.d.) samples. In this paper we go far beyond this classical framework by establishing the bound on...




Journal:

Volume   Issue

Pages  -

Publication date: 2008